  1. #1 / 4
    M57 (Standard Member)
    Rank: Brigadier General
    Rank Posn: #73
    Join Date: Apr 10
    Posts: 5083

    Quite a few years ago, there were a number of threads discussing how to create a system that interpreted Luck Stats objectively. It’s easy to form misconceptions when trying to interpret them at face value. For those interested, I contributed my thoughts on the subject in the wiki.

    https://www.wargear.net/wiki/doku.php?id=general:luck_stats#interpreting_luck_stats

    Long story short, the best method (proposed by Hugh, I believe) involved using calculus, and it was felt that implementing it would be complicated and taxing on the site's resources. Still, I wish there were a reasonable way of interpreting one's Luck Stats over time, and I stumbled on an idea during a brief private conversation with Litotes on the subject.

    My inspiration for the idea came from the NYT Wordle ratings interactive,

    https://www.nytimes.com/interactive/2022/upshot/wordle-bot.html

    ...where a Luck rating is computed for every decision you make, based on how many words were left as a percentage of the range of possible leftover words. Of course, over time this number will converge on 50%. I've been keeping track of my Luck stats and those of the NYT's participants over the last 60+ days. Mine are at 47% (not too surprising with an ~250-guess sample space), and the NYT average is 52%, which, BTW, suggests to me that some folks cheat (or get hints), because with millions of guesses in the till, I'm pretty sure it should be much closer to 50%. But I digress. My idea:

    Find the range of possible Luck Stat outcomes over a period of time. E.g., in a given game, perfect luck might correspond to a Luck Stat (LS) of +100, while losing every roll might correspond to an LS of -90.

    Let's say you scored a -9.0 Luck Stat over the course of that game. That's 10% of how bad it could have been (-9 / -90 = 10%). Take half of that (5%) and subtract it from 50% for a "Luck Rating" of 45%.

    With a score of +9.0: 9/100 = 9%

    9%/2 = 4.5%

    50% + 4.5% = 54.5% Luck Rating.
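    For concreteness, here is a minimal Python sketch of that arithmetic. The function name and parameters (luck_rating, worst_possible, best_possible) are just illustrative; it assumes we know each game's best and worst possible Luck Stats.

        def luck_rating(luck_stat, worst_possible, best_possible):
            # Map a game's Luck Stat onto a 0-100% Luck Rating, where 50% is
            # perfectly average luck. worst_possible is the (negative) Luck Stat
            # from losing every roll; best_possible is the Luck Stat with perfect luck.
            if luck_stat >= 0:
                fraction = luck_stat / best_possible    # e.g. +9.0 / 100 = 9%
                return 50 + 50 * fraction               # 50% + 4.5% = 54.5%
            else:
                fraction = luck_stat / worst_possible   # e.g. -9.0 / -90 = 10%
                return 50 - 50 * fraction               # 50% - 5% = 45%

        print(luck_rating(-9.0, -90, 100))   # 45.0
        print(luck_rating(+9.0, -90, 100))   # 54.5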

    Edited Thu 4th Aug 07:44

  2. #2 / 4
    Spider (Premium Member)
    Rank: Lieutenant
    Rank Posn: #311
    Join Date: Jan 11
    Posts: 121

    +1


  3. #3 / 4
    weathertop (Enginerd)
    Rank: Brigadier General
    Rank Posn: #64
    Join Date: Nov 09
    Posts: 3020

    i feel like you're saying something here... 😇

    I'm a man.
    But I can change,
    if I have to,
    I guess...

  4. #4 / 4
    M57 (Standard Member)
    Rank: Brigadier General
    Rank Posn: #73
    Join Date: Apr 10
    Posts: 5083

    Other thoughts, and an attempt to self-critique my idea...

    In addition to an overall Luck Rating, sub-ratings could be broken out and appended to the existing Luck Stat tables, i.e., by Attacking, Defending, and Opponent.

    While I do think that the z-score method that Hugh proposed a few years ago is a stronger system, Tom indicated that it would require too many resources to implement.

    I'm pretty sure that shorter games (games with fewer rolls) will produce a higher number of outlier scores. An extreme example would be a game where a player loses the first 10 or so rolls in a row and is eliminated, resulting in an LR of 0%. Of course, the chances of this happening in a 1,000-roll game are minuscule. It isn't just putting lipstick on the pig, but it is clearly a flawed attempt at precisely evaluating the sum of one's luck in the context of the sample space.
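    To illustrate the short-game effect, here's a toy simulation. It is not the site's actual Luck Stat math; it simply assumes each roll adds +1 or -1 to the stat with equal probability and reports how often the resulting Luck Rating lands outside a 40-60% band as the roll count grows.

        import random

        def outlier_share(n_rolls, trials=10000):
            # Toy model: each roll contributes +1 (won) or -1 (lost) to the Luck
            # Stat, so the best/worst possible stats are +n_rolls / -n_rolls and
            # the rating formula above reduces to 50 + 50 * ls / n_rolls.
            outliers = 0
            for _ in range(trials):
                ls = sum(random.choice((1, -1)) for _ in range(n_rolls))
                rating = 50 + 50 * ls / n_rolls
                if rating < 40 or rating > 60:
                    outliers += 1
            return outliers / trials

        for n in (10, 100, 1000):
            print(n, outlier_share(n))   # the share of outlier ratings shrinks as games get longer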

    Questions I have: Will players find it useful? Can its obvious flaws be overlooked? Or might it just provide a worthlessly subjective layer of argument when players complain about their luck? I'd like to think of it as a reasonable reference point to guide that discussion.

    I guess it's always bothered me that while Luck Stats provide us with a solid analysis of individual data points, they don't take the additional step of accounting for the sample space (or number of rolls). In its favor, this system is simple, accessible (easy to explain and understand), and not resource-hungry.

    Edited Mon 17th Oct 11:42
